CNN model compression based on activation-entropy based layer-wise iterative pruning strategy
CHEN Chengjun, MAO Yingchi, WANG Yichao
Journal of Computer Applications 2020, 40(5): 1260-1265. DOI: 10.11772/j.issn.1001-9081.2019111977
Abstract

Since existing pruning strategies for Convolutional Neural Network (CNN) models vary widely in approach and achieve only mediocre compression results, an Activation-Entropy based Layer-wise Iterative Pruning (AE-LIP) strategy was proposed to reduce the number of model parameters while keeping the accuracy loss within a controllable range. Firstly, a weight evaluation criterion based on activation-entropy was constructed by combining neuron activation values with information entropy, and an importance score was computed for each weight. Secondly, pruning was performed layer by layer: the weights in each layer were sorted by importance score, and, according to the number of weights to be pruned in that layer, the lowest-scoring weights were selected and set to zero. Finally, the model was fine-tuned, and the above process was repeated until the iterative procedure terminated. The experimental results show that AE-LIP compresses the AlexNet model by 87.5% with an accuracy drop of 2.12 percentage points, and the resulting accuracy is 1.54 percentage points higher than that of the magnitude-based weight pruning strategy and 0.91 percentage points higher than that of the correlation-based weight pruning strategy; it compresses the VGG-16 model by 84.1% with an accuracy drop of 2.62 percentage points, and the resulting accuracy is 0.62 and 0.27 percentage points higher than those of the two strategies above, respectively. These results indicate that the proposed strategy effectively reduces the size of a CNN model while preserving its accuracy, which facilitates deploying CNN models on mobile devices with limited storage.
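
For a concrete picture of the three steps summarized in the abstract, the Python (PyTorch) sketch below illustrates a layer-wise iterative pruning loop. It is an assumption-laden illustration, not the authors' implementation: the exact activation-entropy scoring formula is not given in the abstract, so activation_entropy_score below combines mean absolute channel activation with an entropy term purely as a hypothetical stand-in, and the helper names (prune_layer, iterative_prune, fine_tune, calib_x) are likewise illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F


def activation_entropy_score(layer, activations):
    # Hypothetical activation-entropy criterion: the paper's exact formula is
    # not given in the abstract. Here each output channel is scored by the
    # product of its mean absolute activation and the entropy of its
    # softmax-normalized spatial activation distribution; the channel score is
    # then broadcast onto individual weights, scaled by |w|.
    act_mean = activations.abs().mean(dim=(0, 2, 3))                 # (C_out,)
    probs = F.softmax(activations.flatten(2).mean(dim=0), dim=-1)    # (C_out, H*W)
    entropy = -(probs * (probs + 1e-12).log()).sum(dim=-1)           # (C_out,)
    channel_score = act_mean * entropy
    return layer.weight.detach().abs() * channel_score.view(-1, 1, 1, 1)


def prune_layer(layer, scores, prune_ratio):
    # Step 2: sort weights by score and zero out the lowest-scoring fraction.
    k = int(prune_ratio * scores.numel())
    if k == 0:
        return
    threshold = scores.flatten().kthvalue(k).values
    mask = (scores > threshold).float()
    with torch.no_grad():
        layer.weight.mul_(mask)          # pruned weights are set to zero


def iterative_prune(model, calib_x, fine_tune, prune_ratio=0.2, iterations=5):
    # Steps 1-3 repeated: score, prune layer by layer, then fine-tune.
    # `fine_tune` is a user-supplied callable standing in for the fine-tuning
    # stage; `calib_x` is a small calibration batch of inputs.
    for _ in range(iterations):
        for module in model.modules():
            if isinstance(module, nn.Conv2d):
                acts = []
                handle = module.register_forward_hook(
                    lambda m, inp, out: acts.append(out.detach()))
                with torch.no_grad():
                    model(calib_x)       # collect this layer's activations
                handle.remove()
                scores = activation_entropy_score(module, acts[0])
                prune_layer(module, scores, prune_ratio)
        fine_tune(model)                 # step 3: recover accuracy

Because the weights are only zeroed rather than removed, an actual deployment would additionally store the pruned tensors in a sparse format to realize the reported compression ratios.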
